
    Scaling Bounded Model Checking By Transforming Programs With Arrays

    Bounded Model Checking is one of the most successful techniques for finding bugs in programs. However, model checkers are resource hungry and are often unable to verify programs with loops iterating over large arrays. We present a transformation that enables bounded model checkers to verify a certain class of array properties. Our technique transforms an array-manipulating (ANSI-C) program to an array-free and loop-free (ANSI-C) program, thereby reducing the resource requirements of a model checker significantly. Model checking of the transformed program using an off-the-shelf bounded model checker simulates the loop iterations efficiently. Our transformed program is a sound abstraction of the original program and is also precise in a large number of cases; we formally characterize the class of programs for which it is guaranteed to be precise. We demonstrate the applicability and usefulness of our technique on both industry code and academic benchmarks.
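
    To give a flavour of the idea, here is a minimal hand-worked sketch of such a transformation on a toy loop, written as a CBMC-style verification harness. The primitives nondet_int and __CPROVER_assume follow CBMC naming conventions, and the one-cell encoding below is an illustration of the general approach rather than the paper's actual algorithm.

        #include <assert.h>

        /* CBMC-style modelling primitives (assumed): a declared but
           undefined function is treated as returning a nondeterministic
           value. */
        int nondet_int(void);
        void __CPROVER_assume(int cond);

        #define N 100000

        /* Original program (expensive to unroll for large N):
             int a[N];
             for (int i = 0; i < N; i++) a[i] = 2 * i;
             for (int i = 0; i < N; i++) assert(a[i] >= 0);   */

        int main(void) {
            /* Transformed program: array-free and loop-free. A single
               nondeterministic index k stands in for every iteration, so
               verifying the property at k verifies it at all indices. */
            int k = nondet_int();
            __CPROVER_assume(0 <= k && k < N);

            int a_k = 2 * k;   /* effect of the first loop body at index k */
            assert(a_k >= 0);  /* property from the second loop, at index k */
            return 0;
        }

    Because k ranges over every legal index, the single assertion covers all N iterations without unrolling either loop, which is what lets the model checker scale.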

    Vitamin D Deficiency and Its Health Consequences in Africa

    Africa is heterogeneous in latitude, geography, climate, food availability, religious and cultural practices, and skin pigmentation. It is expected, therefore, that the prevalence of vitamin D deficiency varies widely, in line with influences on skin exposure to UVB sunshine. Furthermore, the low calcium intakes and heavy burden of infectious disease common in many countries may increase vitamin D utilization and turnover. Studies of plasma 25OHD concentration indicate a spectrum from clinical deficiency to values at the high end of the physiological range; however, data are limited. Representative studies of status in different countries, using comparable analytical techniques, and of relationships between vitamin D status and risk of infectious and chronic diseases relevant to the African context are needed. Public health measures to secure vitamin D adequacy cannot encompass the whole continent and need to be developed locally.

    Algorithms and modelling for large-scale Bayesian data analysis

    Bayesian statistics has emerged as a leading paradigm for the analysis of complicated datasets and for reasoning and making predictions under uncertainty. However, the framework faces significant difficulties when it is applied at scale. As datasets grow larger, simulation-based approaches to inference become unviably expensive. As models become more complex, the naïve application of traditional inference methods does not always produce valid predictions. And as Bayesian methods are applied to more complicated phenomena, it is often unclear how even to write down a suitable model on which to perform inference in the first place. This thesis presents three pieces of work aimed at addressing these problems. We describe a Markov chain Monte Carlo method whose cost per iteration does not necessarily scale linearly with the size of the dataset when applied to Bayesian big-data posteriors. We provide a rigorous analysis of this method including precise conditions under which it yields a performance benefit over standard Metropolis–Hastings. We next provide an asymptotic analysis of nested Monte Carlo schemes that are required for certain complex Bayesian models such as probabilistic programs, along with prescriptions to ensure their consistency under well-specified conditions. Finally, we consider the task of learning models from data automatically using deep generative models. We identify a limitation of normalising flow models, which are defined to be homeomorphisms and so must preserve the topology of the prior. We propose a new family of deep generative models to address this, and demonstrate its benefits empirically across a variety of datasets.
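
    For context on the baseline the first contribution is measured against, here is a minimal sketch of standard Metropolis–Hastings with a symmetric random-walk proposal. The standard-normal log target is a stand-in for a real Bayesian posterior, where each evaluation would cost time linear in the dataset size; all names and parameters below are illustrative, not the thesis's method.

        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Log-density of the target, up to an additive constant. A standard
           normal stands in for a posterior; with real data this evaluation
           sums over all n observations, the per-iteration cost that the
           thesis's method aims to beat. */
        static double log_target(double x) {
            return -0.5 * x * x;
        }

        /* Uniform draw on (0, 1). */
        static double unif(void) {
            return (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        }

        int main(void) {
            const int n_iters = 10000;
            const double step = 1.0;   /* random-walk proposal width */
            double x = 0.0;            /* current state of the chain */
            double lp = log_target(x);
            int accepted = 0;

            srand(42);
            for (int i = 0; i < n_iters; i++) {
                /* Symmetric proposal: x' = x + Uniform(-step, step). */
                double prop = x + step * (2.0 * unif() - 1.0);
                double lp_prop = log_target(prop);

                /* Accept with probability min(1, pi(x') / pi(x)). */
                if (log(unif()) < lp_prop - lp) {
                    x = prop;
                    lp = lp_prop;
                    accepted++;
                }
            }
            printf("acceptance rate %.3f, final state %.3f\n",
                   (double)accepted / n_iters, x);
            return 0;
        }

    Every iteration here pays the full cost of log_target, which is why simulation-based inference becomes expensive as datasets grow and why a method with sublinear per-iteration cost is attractive.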